

Solving the Traveling Salesman Problem via Different Quantum Computing Architectures

Padmasola, Venkat, Li, Zhaotong, Chatterjee, Rupak, Dyk, Wesley

arXiv.org Artificial Intelligence

We study the application of emerging photonic and quantum computing architectures to solving the Traveling Salesman Problem (TSP), a well-known NP-hard optimization problem. We investigate several approaches: Simulated Annealing (SA), Quadratic Unconstrained Binary Optimization (QUBO-Ising) methods implemented on quantum annealers and Optical Coherent Ising Machines, as well as the Quantum Approximate Optimization Algorithm (QAOA) and the Quantum Phase Estimation (QPE) algorithm on gate-based quantum computers. QAOA and QPE were tested on the IBM Quantum platform. The QUBO-Ising method was explored using the D-Wave quantum annealer, which operates on superconducting Josephson junctions, and the QCI Dirac machine, a nonlinear optoelectronic Ising machine. Gate-based quantum computers demonstrated accurate results for small TSP instances in simulation. However, real quantum devices are hindered by noise and limited scalability. Circuit complexity grows with problem size, restricting performance to TSP instances with a maximum of 6 nodes. In contrast, Ising-based architectures show improved scalability for larger problem sizes. SQUID-based Ising machines can handle TSP instances with up to 12 nodes, while nonlinear optoelectronic Ising machines extend this capability to 18 nodes. Nevertheless, the solutions tend to be suboptimal due to hardware limitations and challenges in achieving ground state convergence as the problem size increases. Despite these limitations, Ising machines demonstrate significant time advantages over classical methods, making them a promising candidate for solving larger-scale TSPs efficiently.
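The QUBO-Ising encoding mentioned in the abstract can be sketched classically. Below is a minimal illustration of the standard TSP-to-QUBO mapping (not necessarily the paper's exact formulation): binary variable x[i, t] = 1 means city i occupies tour position t, quadratic penalty terms enforce the permutation constraints, and the tiny instance is solved here by brute force rather than on annealing hardware. The penalty weight and distance matrix are illustrative choices.

```python
import itertools

import numpy as np


def tsp_qubo(dist, penalty):
    """Build the standard QUBO matrix for the TSP.

    Variable x[i, t] = 1 means city i is visited at tour position t.
    The two one-hot constraints (each city in exactly one position,
    each position holding exactly one city) are added as quadratic
    penalties of the form penalty * (sum_t x[i, t] - 1)**2, with the
    constant term dropped.
    """
    n = len(dist)
    Q = np.zeros((n * n, n * n))
    idx = lambda i, t: i * n + t

    # Distance objective: city i at position t followed by city j at
    # position t+1 (cyclic) contributes dist[i][j].
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for t in range(n):
                Q[idx(i, t), idx(j, (t + 1) % n)] += dist[i][j]

    # Penalty expansion: each variable sits in two one-hot constraints,
    # giving -2P on the diagonal and +2P on same-city / same-position pairs.
    for i in range(n):
        for t in range(n):
            Q[idx(i, t), idx(i, t)] -= 2 * penalty
        for t in range(n):
            for u in range(t + 1, n):
                Q[idx(i, t), idx(i, u)] += 2 * penalty
    for t in range(n):
        for i in range(n):
            for j in range(i + 1, n):
                Q[idx(i, t), idx(j, t)] += 2 * penalty
    return Q


def brute_force_min(Q):
    """Exhaustive ground-state search; stands in for the annealer."""
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=Q.shape[0]):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_e, best_x = e, x
    return best_x, best_e
```

On real hardware the same Q matrix would be handed to the annealer or Ising machine; the quadratic growth of the variable count (n² binaries for n cities) is exactly why the abstract reports node limits of 6, 12, and 18 across the different architectures.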


Can LLMs plan paths in the real world?

Chen, Wanyi, Su, Meng-Wen, Mehjabin, Nafisa, Cummings, Mary L.

arXiv.org Artificial Intelligence

In early 2024, Volkswagen premiered the first vehicle with ChatGPT integrated into its voice assistant (Volkswagen 2024). Volkswagen claimed that its ChatGPT-enabled voice assistant could be used to control the infotainment, navigation, and air conditioning, or to answer general knowledge questions (Volkswagen 2024). In addition, researchers have explored how LLMs can be used in vision-and-language navigation (VLN). In robotics, VLN involves giving robots or agents verbal instructions on how to navigate using visual cues and landmarks (Schumann et al. 2024). Challenges of VLN include visual and natural language understanding, as well as spatial and temporal understanding.


Mosaic Data Science Combats Climate Change & Accelerates ESG Efforts With Custom Artificial Intelligence & Machine Learning Solutions

#artificialintelligence

LEESBURG, Va., Jan. 09, 2023 (GLOBE NEWSWIRE) -- Mosaic Data Science contributed machine learning algorithm development & deployment services to help a leading power firm automate the process of quantifying the switch to renewable energy portfolios from traditional energy sources while exploring the costs and tradeoffs of said offerings for their business-to-business customers. The solution is designed for enterprises that require power to a diverse set of business functions, such as industrial warehouses, production plants, and related physical infrastructure. The application relies on a highly scalable, custom mathematical optimization algorithm to select the products to eliminate or offset the emissions required to reach the GHG targets. Mosaic's data scientists collaborated with key stakeholders to lay out requirements for an interactive dashboard and the algorithms driving the portfolio recommendations. In the past, this had been a manual, error-prone, and time-consuming effort as sales personnel had to piece together a portfolio to cover energy usage across tens of thousands of service locations for a customer over a multi-decade window.
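The press release does not describe the optimization algorithm itself, but the product-selection step it mentions, choosing a portfolio of offerings that meets a GHG target at minimum cost, can be illustrated with a small sketch. All product names, costs, and offset figures below are invented for the example, and the exhaustive search is only viable for small catalogs; a production system of the scale described would use an ILP/MIP solver.

```python
from itertools import combinations

# Hypothetical renewable products: (name, annual cost in USD, annual offset in tCO2e).
PRODUCTS = [
    ("wind_ppa",      120_000, 9_000),
    ("solar_ppa",      80_000, 5_500),
    ("rec_bundle",     30_000, 2_500),
    ("carbon_offset",  20_000, 1_500),
]


def cheapest_portfolio(products, target_offset):
    """Pick the cheapest subset of products whose combined offset
    meets the GHG reduction target, by exhaustive search."""
    best = None
    for r in range(1, len(products) + 1):
        for combo in combinations(products, r):
            cost = sum(p[1] for p in combo)
            offset = sum(p[2] for p in combo)
            if offset >= target_offset and (best is None or cost < best[0]):
                best = (cost, combo)
    return best
```

For a hypothetical 10,000 tCO2e target, the search returns the wind PPA plus carbon offsets at a combined $140,000/year; the combinatorial blow-up with tens of thousands of service locations is what made the manual process the release describes so error-prone.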


Machine learning models predict hepatocellular carcinoma treatment response

#artificialintelligence

Leesburg, VA, August 17, 2022--According to ARRS' American Journal of Roentgenology (AJR), machine learning models applied to presently underutilized imaging features could help construct more reliable criteria for organ allocation and liver transplant eligibility. "The findings suggest that machine learning-based models can predict recurrence before therapy allocation in patients with early-stage hepatocellular carcinoma (HCC) initially eligible for liver transplant," wrote corresponding author Julius Chapiro from the department of radiology and biomedical imaging at Yale University School of Medicine in New Haven, CT. Chapiro and colleagues' proof-of-concept study included 120 patients (88 men, 32 women; median age, 60 years) diagnosed with early-stage HCC between June 2005 and March 2018, who were initially eligible for liver transplant and underwent treatment by transplant, resection, or thermal ablation. Patients underwent pretreatment MRI and posttreatment imaging surveillance, and imaging features were extracted from postcontrast phases of pretreatment MRI examinations using a pretrained convolutional neural network (VGG-16). Pretreatment clinical characteristics (including laboratory data) and extracted imaging features were integrated to develop three ML models--clinical, imaging, combined--for recurrence prediction within 1–6 years posttreatment. Ultimately, all three models predicted posttreatment recurrence for early-stage HCC; the pretreatment clinical model achieved an AUC of 0.60–0.78.
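The study's model code is not published; as a toy illustration of the "combined" model idea (concatenating clinical and imaging feature vectors into a single classifier), here is a plain logistic regression trained by gradient descent. The feature values and labels are invented, clearly separable stand-ins for real patient data, and logistic regression is only one of many classifiers the authors might have used.

```python
import math


def train_logistic(X, y, lr=0.1, epochs=500):
    """Logistic regression via per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))       # sigmoid
            g = p - yi                        # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b


def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))


# "Combined" model: concatenate clinical and imaging feature vectors.
clinical = [[0.2], [0.8], [0.3], [0.9]]   # toy clinical features
imaging = [[0.1], [0.7], [0.2], [0.9]]    # toy CNN-extracted features
X = [c + i for c, i in zip(clinical, imaging)]
y = [0, 1, 0, 1]                          # 1 = recurrence (invented labels)
w, b = train_logistic(X, y)
```

The concatenation step is the essential point: the clinical-only and imaging-only models each see one half of X, while the combined model sees both.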


DataPrime

#artificialintelligence

Our AI-driven microservices connect seamlessly to your existing infrastructure, delivering AI-powered insights faster than any other solution on the market.


Deep learning, subtraction technique optimal for coronary stent evaluation by CTA

#artificialintelligence

Leesburg, VA, August 10, 2022--According to ARRS' American Journal of Roentgenology (AJR), the combination of deep-learning reconstruction (DLR) and a subtraction technique yielded optimal diagnostic performance for the detection of in-stent restenosis by coronary CTA. Both readers provided a diagnosis of in-stent restenosis only for subtraction HIR and subtraction DLR. Diagnostic confidence scores for the four methods were 2, 2, 3, and 4 for reader 1, and 1, 2, 3, and 3 for reader 2. The patient subsequently underwent invasive catheter angiography; fluoroscopic images obtained (E) before and (F) after contrast media injection demonstrate in-stent restenosis of the proximal aspect of the stent (arrow).


MRI analysis with machine learning predicts impairment after spinal injury, study shows

#artificialintelligence

Leesburg, VA, April 2, 2018 - A test of machine-learning algorithms shows promise for computer-aided prognosis of acute spinal cord injury, according to a study to be presented at the ARRS 2018 Annual Meeting, set for April 22-27 in Washington, DC. The study to be presented by Jason Talbot, assistant professor of radiology at the University of California, San Francisco, involved using semiautomated image analysis with machine-learning algorithms to assess the accuracy of axial T2-weighted radiomic features for classifying patients by degree of neurologic injury. Several machine-learning algorithms were tested for injury classification based on texture variables. For each trained model, the accuracy of predicting the testing set was recorded, as were variables important to the model. This proof-of-principle study highlights the feasibility of applying a semiautomated MRI analysis pipeline for atlas-based texture feature extraction from T2-weighted MRI at the epicenter of acute spinal cord injury (SCI).


Allen Newell: A Remembrance

Habermann, Nico

AI Magazine

I met Allen for the first time when I came for a two-semester visit to Carnegie Mellon University in 1968. This encounter was a distinct factor in my later decision to join the faculty there.


Guest Editorial

Clancey, William J.

AI Magazine

Good books, well conceived, well written, and well presented, can do much to promote the science of AI and the AAAI organization. The AAAI Press edited collections, from which the articles of this issue are excerpted, are designed to reach out to an audience that wants to learn more about AAAI and AI.


NON-VON's applicability to three AI task areas

Shaw, D. E.

Classics

NON-VON is a massively parallel machine constructed using custom VLSI chips, each containing a number of simple processing elements. A preliminary prototype is now operational at Columbia University. The machine is intended to provide highly efficient support for a wide range of artificial intelligence and other symbolic applications. This paper briefly describes the current version of the NON-VON machine and presents evidence for its applicability to the execution of OPS5 production systems, a number of low- and intermediate-level computer vision tasks, and certain "difficult" relational algebraic operations relevant to knowledge base management. Analytic and simulation results are presented for a number of algorithms. The data suggest that NON-VON could provide a performance improvement of as much as two to three orders of magnitude over a conventional sequential machine for a wide range of AI tasks.